732 research outputs found

    Shadoks Approach to Convex Covering

    Full text link
We describe the heuristics used by the Shadoks team in the CG:SHOP 2023 Challenge. The Challenge consists of 206 instances, each being a polygon with holes. The goal is to cover each instance polygon with a small number of convex polygons. Our general strategy is the following. We find a big collection of large (often maximal) convex polygons inside the instance polygon and then solve several set cover problems to find a small subset of the collection that covers the whole polygon. Comment: SoCG CG:SHOP 2023 Challenge.
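A minimal sketch of the second stage described above (the set cover step), assuming the collection of candidate convex polygons has already been computed and that coverage is tracked against a finite set of witness points. The names `universe` and `candidates` and the toy data are made up for illustration; this greedy loop is the textbook set-cover heuristic, not the Shadoks team's actual solver.

```python
# Greedy set cover over a precomputed collection of convex pieces.
# Assumed (hypothetical) input format: `candidates` maps a candidate id to the
# set of witness-point ids it covers; `universe` is the set of all witness points.

def greedy_cover(universe, candidates):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered witness points.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            raise ValueError("witness points left that no candidate covers")
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy usage: 6 witness points, 4 candidate convex pieces.
universe = range(6)
candidates = {
    "A": {0, 1, 2},
    "B": {2, 3},
    "C": {3, 4, 5},
    "D": {1, 4},
}
print(greedy_cover(universe, candidates))   # ['A', 'C'] covers everything
```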

    On the Combinatorial Complexity of Approximating Polytopes

    Get PDF
Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body K of diameter diam(K) is given in Euclidean d-dimensional space, where d is a constant. Given an error parameter ε > 0, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from K is at most ε·diam(K). By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that O(1/ε^{(d-1)/2}) facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is Õ(1/ε^{(d-1)/2}), where Õ conceals a polylogarithmic factor in 1/ε. This is a significant improvement upon the best known bound, which is roughly O(1/ε^{d-2}). Our result is based on a novel combination of both old and new ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of BĂĄrĂĄny and Larman's economical cap covering. Finally, we use a deterministic adaptation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction. Comment: In Proceedings of the 32nd International Symposium on Computational Geometry (SoCG 2016) and accepted to the SoCG 2016 special issue of Discrete and Computational Geometry.
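To see where the exponent (d-1)/2 comes from, the following small numerical check works through the d = 2 case of Dudley's bound: an inscribed regular polygon with roughly π/(2√ε) vertices approximates the unit circle (diameter 2) within Hausdorff distance ε·diam. This is only an illustration for the ball, not the paper's stratified Macbeath-region construction.

```python
import math

# For the unit circle (diameter 2), an inscribed regular n-gon deviates from the
# circle by at most 1 - cos(pi/n).  Choosing n ~ pi / (2*sqrt(eps)) keeps that
# deviation below eps * diam = 2*eps, i.e. O(1/eps^{1/2}) vertices when d = 2.
for eps in (1e-2, 1e-3, 1e-4, 1e-5):
    n = math.ceil(math.pi / (2 * math.sqrt(eps)))
    hausdorff = 1 - math.cos(math.pi / n)      # max circle-to-polygon distance
    assert hausdorff <= 2 * eps
    print(f"eps={eps:g}  vertices={n}  error={hausdorff:.2e}  budget={2*eps:.2e}")
```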

    Efficient Algorithms for Battleship

    Get PDF
We consider an algorithmic problem inspired by the Battleship game. In the variant of the problem that we investigate, there is a unique ship of shape S ⊂ Z^2 which has been translated in the lattice Z^2. We assume that a player has already hit the ship with a first shot and the goal is to sink the ship using as few shots as possible, that is, by minimizing the number of missed shots. While the player knows the shape S, which position of S has been hit is not known. Given a shape S of n lattice points, the minimum number of misses that can be achieved in the worst case by any algorithm is called the Battleship complexity of the shape S and denoted c(S). We prove three bounds on c(S), each considering a different class of shapes. First, we have c(S) ≤ n-1 for arbitrary shapes and the bound is tight for parallelogram-free shapes. Second, we provide an algorithm that shows that c(S) = O(log n) if S is an HV-convex polyomino. Third, we provide an algorithm that shows that c(S) = O(log log n) if S is a digital convex set. This last result is obtained through a novel discrete version of the Blaschke-Lebesgue inequality relating the area and the width of any convex body. Comment: Conference version at the 10th International Conference on Fun with Algorithms (FUN 2020).
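One way to see the general bound c(S) ≤ n-1 is a simple baseline: keep the set of translates consistent with the shots so far and fire only at cells occupied by at least one surviving candidate, so that every miss eliminates a candidate. The sketch below simulates that baseline; the shape, hidden translate, and first hit are made-up inputs, and this is not the paper's O(log n) or O(log log n) strategy.

```python
# Naive Battleship strategy: each miss kills at least one candidate translate,
# so there are at most n-1 misses for a shape with n cells.

def play(shape, hidden_shift, first_hit):
    shape = list(shape)
    ship = {(x + hidden_shift[0], y + hidden_shift[1]) for (x, y) in shape}
    assert first_hit in ship, "the first shot is assumed to hit the ship"
    # Translates of the shape that are consistent with the first hit.
    candidates = [(first_hit[0] - x, first_hit[1] - y) for (x, y) in shape]
    shots, misses = {first_hit}, 0
    while not ship <= shots:                      # until the ship is sunk
        # Fire at any unshot cell occupied by at least one surviving candidate.
        target = next((x + dx, y + dy)
                      for (dx, dy) in candidates for (x, y) in shape
                      if (x + dx, y + dy) not in shots)
        shots.add(target)
        if target in ship:
            candidates = [t for t in candidates
                          if target in {(x + t[0], y + t[1]) for (x, y) in shape}]
        else:
            misses += 1
            candidates = [t for t in candidates
                          if target not in {(x + t[0], y + t[1]) for (x, y) in shape}]
    return misses

# L-tromino, hidden translate (5, 7), first hit on the ship's corner cell.
tromino = [(0, 0), (1, 0), (0, 1)]
print(play(tromino, (5, 7), (5, 7)))   # number of missed shots (at most n-1 = 2)
```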

    The Cost of Perfection for Matchings in Graphs

    Full text link
Perfect matchings and maximum weight matchings are two fundamental combinatorial structures. We consider the ratio between the maximum weight of a perfect matching and the maximum weight of a general matching. Motivated by a computer graphics application on triangle meshes, where we seek to convert a triangulation into a quadrangulation by merging pairs of adjacent triangles, we focus mainly on bridgeless cubic graphs. First, we characterize graphs that attain the extreme ratios. Second, we present a lower bound for all bridgeless cubic graphs. Third, we present upper bounds for subclasses of bridgeless cubic graphs, most of which are shown to be tight. Additionally, we present tight bounds for the class of regular bipartite graphs.
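For experimenting with this ratio on a concrete weighting, networkx is enough: with maxcardinality=True, max_weight_matching returns a maximum-weight matching among maximum-cardinality ones, which is a maximum-weight perfect matching whenever the graph has one. The weighted triangular prism below is an illustrative example of my own choosing, not taken from the paper.

```python
import networkx as nx

def perfection_ratio(G):
    """Ratio of the max-weight perfect matching to the max-weight matching of G,
    for the edge weights stored on G (assumes G admits a perfect matching)."""
    def total(matching):
        return sum(G[u][v]["weight"] for u, v in matching)
    # maxcardinality=True: max-weight matching among maximum-cardinality ones,
    # i.e. a maximum-weight perfect matching here.
    perfect = nx.max_weight_matching(G, maxcardinality=True)
    general = nx.max_weight_matching(G, maxcardinality=False)
    return total(perfect) / total(general)

# Triangular prism: a bridgeless cubic graph.  Weights chosen so the optima differ.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a1", "a2", 10), ("a2", "a3", 1), ("a1", "a3", 1),   # top triangle
    ("b1", "b2", 1),  ("b2", "b3", 10), ("b1", "b3", 1),  # bottom triangle
    ("a1", "b1", 1),  ("a2", "b2", 1),  ("a3", "b3", 1),  # rungs
])
print(perfection_ratio(G))   # 12/20 = 0.6 for this weighting
```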

    On the ratio between maximum weight perfect matchings and maximum weight matchings in grids

    Get PDF
Given a graph G that admits a perfect matching, we investigate the parameter η(G) (originally motivated by computer graphics applications) which is defined as follows. Among all nonnegative edge weight assignments, η(G) is the minimum ratio between (i) the maximum weight of a perfect matching and (ii) the maximum weight of a general matching. In this paper, we determine the exact value of η for all rectangular grids, all bipartite cylindrical grids, and all bipartite toroidal grids. We introduce several new techniques in this endeavor.
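Restated in symbols (nothing beyond the definition given above), the parameter is:

```latex
% eta(G): minimized over all nonnegative edge-weight assignments w
\eta(G) \;=\; \min_{w \colon E(G) \to \mathbb{R}_{\ge 0}}
  \frac{\max \{\, w(M)  : M  \text{ is a perfect matching of } G \,\}}
       {\max \{\, w(M') : M' \text{ is a matching of } G \,\}},
\qquad w(M) = \sum_{e \in M} w(e).
```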

    Short Flip Sequences to Untangle Segments in the Plane

    Full text link
A (multi)set of segments in the plane may form a TSP tour, a matching, a tree, or any multigraph. If two segments cross, then we can reduce the total length with the following flip operation. We remove a pair of crossing segments, and insert a pair of non-crossing segments, while keeping the same vertex degrees. The goal of this paper is to devise efficient strategies to flip the segments in order to obtain crossing-free segments after a small number of flips. Linear and near-linear bounds on the number of flips were only known for segments with endpoints in convex position. We generalize these results, proving linear and near-linear bounds for cases with endpoints that are not in convex position. Our results are proved in a general setting that applies to multiple problems, using multigraphs and the distinction between removal and insertion choices when performing a flip. Comment: 19 pages, 10 figures.
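The basic local move is easy to state in code. The sketch below performs one flip on a perfect matching of points: it finds a properly crossing pair of segments and reconnects the four endpoints so the two new segments do not cross, which preserves vertex degrees and, by the triangle inequality, strictly decreases total length. This is just the elementary operation, not the paper's flip-selection strategies, and the toy coordinates are made up.

```python
# One uncrossing flip on a geometric perfect matching, represented as a list of
# point pairs.  Degrees are preserved: {a,b},{c,d} becomes {a,c},{b,d} or {a,d},{b,c}.

def orient(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def cross(seg1, seg2):
    """True if the segments properly cross (general position assumed)."""
    (a, b), (c, d) = seg1, seg2
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def flip_once(matching):
    """Replace one crossing pair by a non-crossing reconnection; True if flipped."""
    for i in range(len(matching)):
        for j in range(i + 1, len(matching)):
            if cross(matching[i], matching[j]):
                (a, b), (c, d) = matching[i], matching[j]
                # Of the two degree-preserving reconnections, pick a non-crossing one.
                new = ((a, c), (b, d)) if not cross((a, c), (b, d)) else ((a, d), (b, c))
                matching[i], matching[j] = new
                return True
    return False

# Toy run: keep flipping until the matching is crossing-free.
matching = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 0), (4, 1))]
flips = 0
while flip_once(matching):
    flips += 1
print(flips, matching)
```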

    Approximate Convex Intersection Detection with Applications to Width and Minkowski Sums

    Get PDF
Approximation problems involving a single convex body in R^d have received a great deal of attention in the computational geometry community. In contrast, works involving multiple convex bodies are generally limited to dimensions d ≤ 3 and/or do not consider approximation. Given an approximation parameter ε > 0, we show how to independently preprocess two polytopes A, B ⊂ R^d into data structures of size O(1/ε^{(d-1)/2}) such that we can answer in polylogarithmic time whether A and B intersect approximately. More generally, we can answer this for the images of A and B under affine transformations. Next, we show how to ε-approximate the Minkowski sum of two given polytopes defined as the intersection of n halfspaces in O(n log(1/ε) + 1/ε^{(d-1)/2 + α}) time, for any constant α > 0. Finally, we present a surprising impact of these results on a well-studied problem that considers a single convex body. We show how to ε-approximate the width of a set of n points in O(n log(1/ε) + 1/ε^{(d-1)/2 + α}) time, for any constant α > 0, a major improvement over the previous bound of roughly O(n + 1/ε^{d-1}) time.
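For contrast, the sketch below is the straightforward exact intersection test for two polytopes given by their vertex sets, written as a linear feasibility problem with scipy. It spends polynomial time on every query, whereas the data structures described above answer approximate queries in polylogarithmic time after preprocessing; the example triangles are made up.

```python
import numpy as np
from scipy.optimize import linprog

def convex_hulls_intersect(A, B):
    """Exact test: do conv(A) and conv(B) intersect?  A: (n, d), B: (m, d) arrays.
    Feasibility LP: find convex weights lambda, mu with A^T lambda = B^T mu."""
    n, d = A.shape
    m, _ = B.shape
    A_eq = np.zeros((2 + d, n + m))
    b_eq = np.zeros(2 + d)
    A_eq[0, :n] = 1.0;  b_eq[0] = 1.0        # sum lambda = 1
    A_eq[1, n:] = 1.0;  b_eq[1] = 1.0        # sum mu = 1
    A_eq[2:, :n] = A.T                       # A^T lambda - B^T mu = 0
    A_eq[2:, n:] = -B.T
    c = np.zeros(n + m)                      # pure feasibility problem
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m))
    return res.success

A = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 0.5], [3.0, 1.0], [1.0, 3.0]])
C = np.array([[5.0, 5.0], [6.0, 5.0], [5.0, 6.0]])
print(convex_hulls_intersect(A, B), convex_hulls_intersect(A, C))   # True False
```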

    Optimal Area-Sensitive Bounds for Polytope Approximation

    Full text link
Approximating convex bodies is a fundamental question in geometry and has a wide variety of applications. Given a convex body K of diameter Δ in R^d for fixed d, the objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error ε. The best known uniform bound, due to Dudley (1974), shows that O((Δ/ε)^{(d-1)/2}) facets suffice. While this bound is optimal in the case of a Euclidean ball, it is far from optimal for "skinny" convex bodies. A natural way to characterize a convex object's skinniness is in terms of its relationship to the Euclidean ball. Given a convex body K, define its surface diameter Δ_{d-1} to be the diameter of a Euclidean ball of the same surface area as K. It follows from generalizations of the isoperimetric inequality that Δ ≥ Δ_{d-1}. We show that, under the assumption that the width of the body in any direction is at least ε, it is possible to approximate a convex body using O((Δ_{d-1}/ε)^{(d-1)/2}) facets. This bound is never worse than the previous bound and may be significantly better for skinny bodies. The bound is tight, in the sense that for any value of Δ_{d-1}, there exist convex bodies that, up to constant factors, require this many facets. The improvement arises from a novel approach to sampling points on the boundary of a convex body. We employ a classical concept from convexity, called Macbeath regions. We demonstrate that Macbeath regions in K and K's polar behave much like polar pairs. We then apply known results on the Mahler volume to bound their number.
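For concreteness, since the surface area of a Euclidean ball of radius r in R^d equals d·ω_d·r^(d-1), where ω_d is the volume of the unit d-ball, the surface diameter defined above can be written explicitly; this is only an unpacking of the definition, not a statement from the paper:

```latex
% Surface diameter of K: the ball with the same surface area as K.
\mathrm{area}(\partial K) \;=\; d\,\omega_d \left(\tfrac{\Delta_{d-1}}{2}\right)^{d-1}
\quad\Longrightarrow\quad
\Delta_{d-1} \;=\; 2\left(\frac{\mathrm{area}(\partial K)}{d\,\omega_d}\right)^{\!1/(d-1)},
\qquad \omega_d = \frac{\pi^{d/2}}{\Gamma\!\left(\tfrac{d}{2}+1\right)}.
```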

    Approximate Nearest Neighbor Searching with Non-Euclidean and Weighted Distances

    Full text link
We present a new approach to approximate nearest-neighbor queries in fixed dimension under a variety of non-Euclidean distances. We are given a set S of n points in R^d, an approximation parameter ε > 0, and a distance function that satisfies certain smoothness and growth-rate assumptions. The objective is to preprocess S into a data structure so that for any query point q in R^d, it is possible to efficiently report any point of S whose distance from q is within a factor of 1+ε of the actual closest point. Prior to this work, the most efficient data structures for approximate nearest-neighbor searching in spaces of constant dimensionality applied only to the Euclidean metric. This paper overcomes this limitation through a method called convexification. For admissible distance functions, the proposed data structures answer queries in logarithmic time using O(n log(1/ε) / ε^{d/2}) space, nearly matching the best known bounds for the Euclidean metric. These results apply to both convex scaling distance functions (including the Mahalanobis distance and weighted Minkowski metrics) and Bregman divergences (including the Kullback-Leibler divergence and the Itakura-Saito distance).
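As a point of reference for the query problem, a brute-force exact nearest neighbor under the Kullback-Leibler divergence (one of the Bregman divergences covered) is sketched below; the paper's data structure replaces this linear scan with logarithmic-time (1+ε)-approximate queries. The random point set, the query, and the choice of divergence direction are made up for illustration.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for strictly positive probability vectors."""
    return float(np.sum(p * np.log(p / q)))

def nearest_neighbor_kl(points, query):
    """Exact linear-scan nearest neighbor under KL divergence (the naive baseline).
    KL is asymmetric; D(query || p) is one common convention, used here."""
    divs = [kl_divergence(query, p) for p in points]
    return int(np.argmin(divs)), min(divs)

rng = np.random.default_rng(0)
points = rng.dirichlet(np.ones(4), size=100)   # 100 points on the 3-simplex
query = rng.dirichlet(np.ones(4))
print(nearest_neighbor_kl(points, query))      # (index of closest point, divergence)
```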

    Complexity dichotomy on partial grid recognition

    Get PDF
Deciding whether a graph can be embedded in a grid using only unit-length edges is NP-complete, even when restricted to binary trees. However, it is not difficult to devise a number of graph classes for which the problem is polynomial, even trivial. A natural step, outstanding thus far, was to provide a broad classification of graphs that make for polynomial or NP-complete instances. We provide such a classification based on the set of allowed vertex degrees in the input graphs, yielding a full dichotomy on the complexity of the problem. As byproducts, the previous NP-completeness result for binary trees was strengthened to strictly binary trees, and the three-dimensional version of the problem was for the first time proven to be NP-complete. Our results were made possible by introducing the concepts of consistent orientations and robust gadgets, and by showing how the former allows NP-completeness proofs by local replacement even in the absence of the latter.
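To make the decision problem concrete, the sketch below is a tiny backtracking check of whether a connected graph can be embedded in the two-dimensional grid with unit-length edges. It runs in exponential time and is only usable on very small inputs, consistent with the NP-completeness results above; it has no relation to the paper's reductions, and the example graphs are made up.

```python
def embeds_in_grid(n, edges):
    """Backtracking test: can a connected graph on vertices 0..n-1 be drawn with
    vertices on distinct points of Z^2 and every edge of unit length?"""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Order vertices so each one (after the first) has an already-placed neighbor.
    order, seen = [0], {0}
    for u in order:
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                order.append(v)
    assert len(order) == n, "this sketch assumes a connected graph"

    pos = {0: (0, 0)}
    def place(i):
        if i == len(order):
            return True
        v = order[i]
        anchor = next(pos[u] for u in adj[v] if u in pos)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (anchor[0] + dx, anchor[1] + dy)
            if p in pos.values():
                continue                      # grid points must be distinct
            # Every already-placed neighbor must sit at distance exactly 1.
            if all(abs(p[0] - pos[u][0]) + abs(p[1] - pos[u][1]) == 1
                   for u in adj[v] if u in pos):
                pos[v] = p
                if place(i + 1):
                    return True
                del pos[v]
        return False

    return place(1)

print(embeds_in_grid(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # 4-cycle: True
print(embeds_in_grid(3, [(0, 1), (1, 2), (2, 0)]))           # triangle: False
```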